A majority of online and social media defenders of Obamacare are professionals who are “paid to post,” according to a digital expert.
“Sixty percent of all the posts were made from 100 profiles, posting between the hours of 9 and 5 Pacific Time,” said Michael Brown. “They were paid to post.”
He began investigating after he posted criticism of the former president’s health insurance program on the Obamacare Facebook page and was hit hard by digital activists pretending to be regular people. Attkisson reports that he evaluated 226,000 pro-Obamacare posts made by 40,000 Facebook profiles.
“Digital activists are paid employees; their purpose is to attack anyone who’s posting something contrary to the view the page owner wants expressed,” he told Attkisson. “Sixty percent of all the posts were made from 100 profiles, posting between the hours of 9 and 5 Pacific Time.”
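A quick back-of-the-envelope check of those figures (a minimal sketch in Python; the report doesn’t say over what period the 226,000 posts were collected, so this is only an average over the whole dataset):

```python
# Figures from the quoted report; everything else is simple arithmetic.
total_posts = 226_000     # pro-Obamacare posts evaluated
total_profiles = 40_000   # Facebook profiles they came from
top_profiles = 100        # "Sixty percent of all the posts were made from 100 profiles"
top_share = 0.60

posts_from_top = total_posts * top_share
print(posts_from_top)                    # 135600.0 posts from the top 100 profiles
print(posts_from_top / top_profiles)     # 1356.0 posts per top profile
print(total_posts / total_profiles)      # 5.65 posts per profile across the whole sample
```

Roughly 1,350 posts apiece from the busiest hundred accounts, versus fewer than six per profile overall, which is the kind of skew Brown is describing.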
the truth doesn’t change no matter how many times it’s deleted
Gods, Eugine, you’re boring. You’ve been banned from LW, you can’t accept that, and the best way you can think of to handle this situation is to keep posting the same fucking comments again and again and again and again because … what? What good outcome do you expect from doing this? Literally the only thing you are doing is adding a bit of annoyance to the lives of people who never did you any harm.
Perhaps it makes you feel like you’re outsmarting the moderators or something. I do hope not. Because any idiot can copy and paste things, and anyone a step or two above idiocy can write a script to do it.
Get a life and leave us in peace.
Well, that’s a new one. I take it no one’s buying your “everyone who disagrees with me is an ‘asshole’” routine anymore, so you need a new insult to hurl at people.
Yeah, how could any reasonable person possibly think it’s boring when someone does the exact same thing over and over and over dozens of times?
I have never either thought or claimed that everyone who disagrees with me is an asshole, nor have I been in the habit of insulting you despite ample provocation.
Regarding those quotation marks around “asshole”: I just checked every comment I ever made on LW according to Wei_Dai’s fetch-everything tool; the last time I applied the word “asshole” to another LWer by even the most generous criterion was in July 2009, and so far as I know the person in question was not one of your many, many sockpuppets.
Goes into the “shit LW says” bucket :-D
LOL. First, are you quite sure you want Eugine to move in that particular direction? Of course there is that, but still...
Second, I think your notions of what people a step or two above idiocy can achieve are a bit… optimistic.
I rather suspect he already has moved in that particular direction. Could be wrong, though.
Yeah, I’m adopting a definition of idiocy adapted to the local context.
It’s happening not just in the USA. In Slovakia, research was recently published about fake accounts commenting on social networks and in the comment sections of major newspapers. It’s quite scary how effective the guerrilla information war can be. This is how it works, in a nutshell:
You hire a few people (you really don’t need many of them) working for minimum wage and speaking the language of the target country. Their task is to create a dozen accounts on various websites. If names and photos are required, they just make up fake names and steal someone’s photo from Google Images. (If someone finds that out and bans the account, no problem; they create a new one.)
Separately from these people, you have a few smart guys who decide what memes they want to spread. They choose the message and compose a few dozen text samples saying the same thing in different words. Then they e-mail the text samples to the minimum-wage people, whose task is essentially to keep copy-pasting these messages in many places, with minor modifications, for 8 hours a day.
Imagine a group with, say, one manager, two smart guys designing the memes, and five stupid guys who copy them. How many comments can they generate each day? How much can they influence the perception of an average reader?
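For a rough sense of the volume such a group could produce, here is a minimal back-of-the-envelope sketch; the posting rate is my own assumption, not a figure from the research:

```python
# Hypothetical estimate for the group described above (1 manager, 2 writers, 5 copiers).
# The pace per copier is an assumed figure for illustration only.
copiers = 5
hours_per_shift = 8
minutes_per_comment = 4   # assume ~4 minutes to lightly reword and post one message

comments_per_copier = hours_per_shift * 60 // minutes_per_comment
comments_per_day = copiers * comments_per_copier
print(comments_per_copier, comments_per_day)   # 120 per copier, 600 for the group, per day
```

Even at that modest assumed pace, a team of eight people puts out several hundred comments every working day.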
Thanks for the information! May I get a link to that research paper? My google-fu is apparently weak.
Sorry, I’m having trouble finding it now too. :(
Here are a couple of useful links—from the New York Times and the Guardian.
Information wars are quite real. How much actual impact they have is a different question.
A fun possibility to consider: do we have a Russian mole here on LW? X-D
That form of spamming hundreds of websites with minor modifications of messages optimized for an average reader obviously wouldn’t survive on LW. And optimizing messages specifically for LW wouldn’t be cost-effective. The operation needs to scale well; there is a whole internet to take over, and even the Kremlin’s propaganda budget has its limits.
But I could imagine a multi-tiered approach. Something like categorizing the websites into a few groups and crafting messages for each of these groups separately. The categorization itself could actually be quite simple: for each category, you choose a few “prototypical” websites, and then let a neural network assign the right category to everything else. Then you do the same thing—have a smart guy write a message optimized to impact people in a given category, and a stupid guy spam all websites assigned to that category with small modifications of the message—for each category separately.
With enough categories, I could imagine that one of them could be compatible enough with LW. “Rationalists” is probably too narrow, but something like “educated people” or maybe even “STEM” could be among the first dozen categories.
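To make the categorization step concrete, here is a minimal sketch; it substitutes a simple nearest-centroid classifier over TF-IDF features for the neural network, and the category names and site texts are invented placeholders:

```python
# Hypothetical illustration of prototype-based site categorization.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestCentroid

# Text standing in for a few hand-picked "prototypical" websites per category.
prototype_texts = [
    "open source compiler benchmarks gpu machine learning paper",   # STEM
    "election polls senate campaign debate candidate turnout",      # politics
    "recipes travel fitness fashion home decoration tips",          # lifestyle
]
prototype_labels = ["STEM", "politics", "lifestyle"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(prototype_texts)

classifier = NearestCentroid()          # stands in for the neural network
classifier.fit(X, prototype_labels)

# Every other website gets assigned to the closest prototype category.
other_sites = [
    "machine learning paper on gpu benchmarks",
    "easy recipes and travel tips for the home",
]
print(classifier.predict(vectorizer.transform(other_sites)))
# expected: ['STEM' 'lifestyle']
```

The point is only that the sorting step is cheap: a handful of prototype sites per category is enough to route everything else automatically; the expensive part remains writing one persuasive message per category.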
Whenever a place sinks into the #NeverTrump pit of stupidity, it never crawls back out.
Trump won because the left forgot how to argue rationally, and Trump knew how to make his supporters feel smart for not believing the paranoid moronic more-biased-every-day media. Trump outsmarted every leftist on the planet. At this point, he deserves America, and if he turns out to be half as bad as the libtards say he’s going to be, he’ll be kicked out in seconds. So why care? Why try to work people up into a paranoid frenzy, when that’s failed once before and it’ll fail again and again until the left learns how to admit its many flaws and mistakes and learn from them?
You know what you were saying about “the #NeverTrump pit of stupidity”? Things like “libtard” are the same thing on the other side. Do you seriously believe that that sort of terminology helps produce productive discussions?
In particular, this:
I don’t think so, or just a trivial amount. Looking at downvotes or the lack of upvotes, I don’t get that sense at all. Political talk is almost always discouraged, and when it does go on, it’s fairly even-handed.
There ain’t no downvotes any more.
Top half of the article is about politics, but halfway down they start getting into info war and bots and AI.
“The war of the bots is one of the wilder and weirder aspects of the elections of 2016. At the Oxford Internet Institute’s Unit for Computational Propaganda, its director, Phil Howard, and director of research, Sam Woolley, show me all the ways public opinion can be massaged and manipulated. But is there a smoking gun, I ask them, evidence of who is doing this? “There’s not a smoking gun,” says Howard. “There are smoking machine guns. There are multiple pieces of evidence.”
“Look at this,” he says and shows me how, before the US election, hundreds upon hundreds of websites were set up to blast out just a few links, articles that were all pro-Trump. “This is being done by people who understand information structure, who are bulk buying domain names and then using automation to blast out a certain message.”
One of the things that concerns Howard most is the hundreds of thousands of “sleeper” bots they’ve found. Twitter accounts that have tweeted only once or twice and are now sitting quietly waiting for a trigger: some sort of crisis where they will rise up and come together to drown out all other sources of information.
On its website, Cambridge Analytica makes the astonishing boast that it has psychological profiles based on 5,000 separate pieces of data on 220 million American voters
Bio-psycho-social profiling, I read later, is one offensive in what is called “cognitive warfare”. Though there are many others: “recoding the mass consciousness to turn patriotism into collaborationism,” explains a Nato briefing document on countering Russian disinformation written by an SCL employee. “Time-sensitive professional use of media to propagate narratives,” says one US state department white paper. “Of particular importance to psyop personnel may be publicly and commercially available data from social media platforms.”
Out on Twitter, the new transnational battleground for the future, someone I follow tweets a quote by Marshall McLuhan, the great information theorist of the 60s. “World War III will be a guerrilla information war,” it says. “With no divisions between military and civilian participation.”
By that definition we’re already there.”
This reads like what Scott Adams would call cognitive dissonance.
In particular, it omits the fact that the (anti-Trump) CEOs of Twitter and Facebook have access to more data, and presumably better algorithms, but weren’t able to swing the election. Almost as if it’s easier to influence people when the direction you’re pushing them in agrees with their day-to-day observations. But that’s way too scary a thought for the people at the Guardian.
Your comments aren’t being deleted for being true, nor for being false. They are being deleted because you have been banned from Less Wrong. Over and over again. For gross abuse of the system. And anything, true or false, becomes boring when repeated often enough.
Which means that if you actually cared about persuading people you’d long ago have either given this place up as a bad job and tried elsewhere, or fixed your behaviour in the hope of getting unbanned and being able to have the kind of conversation that occasionally actually changes people’s minds. (I suspect that the faint possibility that one day that might happen is one reason why some of your sockpuppet accounts have lasted longer than a few hours.)
If you genuinely want to convince anyone, that’s how you do it. Otherwise you’re just getting in the way, and teaching everyone the lesson that people who say the sort of things you do are likely to behave the way you do. That is not the lesson you actually want to be teaching.
Most of them are over-and-over-again reposts from the same person, who was banned years ago for abusive behaviour and keeps coming back and being banned again (and creating huge numbers of sockpuppets to downvote people he doesn’t like, which is the main reason why there are currently no downvotes on LW).
[EDITED to clarify:] So what happens is: he posts something, his account gets zapped because he’s supposed to be banned and his comments are deleted, and then he makes a new account and posts all the same things again, repeat ad nauseam. I can’t imagine he thinks he’s actually doing something constructive; I think he does it as a sort of fuck-you to the LW admins.
That would be the person you recently said “Did we just become best friends?” to, by the way.
Yet, you somehow decided that noting it in response to the great-grandparent was worth your while.
I want to take a look at his data before I believe him.